    Detection of Carbon Monoxide Using Polymer-Composite Films with a Porphyrin-Functionalized Polypyrrole

    Post-fire air constituents of interest to NASA include CO and some acid gases (HCl and HCN). CO is an important analyte to sense in human habitats since it is a marker for both pre-fire detection and post-fire cleanup. The need exists for a sensor that can be incorporated into an existing sensing array architecture. The CO sensor needs to be a low-power chemiresistor that operates at room temperature, and the sensor fabrication techniques must be compatible with ceramic substrates. Early work on the JPL Electronic Nose indicated that some of the existing polymer-carbon black sensors might be suitable. In addition, a CO sensor based on polypyrrole functionalized with iron porphyrin was demonstrated to be a promising sensor that could meet the requirements. First, pyrrole was polymerized in a ferric chloride/iron porphyrin solution in methanol. The iron porphyrin is 5,10,15,20-tetraphenyl-21H,23H-porphine iron(III) chloride. This creates a polypyrrole that is functionalized with the porphyrin. After synthesis, the polymer is dried in an oven. Sensors were made from the functionalized polypyrrole by binding it with a small amount of polyethylene oxide (600 MW). This composite made films that were too resistive to be measured in the device. Subsequently, carbon black was added to the composite to bring the sensing film resistivity within a measurable range. A suspension was created in methanol using the functionalized polypyrrole (90% by weight), polyethylene oxide (600,000 MW, 5% by weight), and carbon black (5% by weight). The sensing films were then deposited in the same manner as the polymer-carbon black sensors. After deposition, the substrates were dried in a vacuum oven for four hours at 60 °C. These sensors showed good response to CO at concentrations over 100 ppm. While the sensor is based on a functionalized polypyrrole, the actual composite is more robust and flexible: a polymer binder was added to help keep the sensor material from delaminating from the electrodes, and carbon black was added to improve the conductivity of the material.

    Co-polymer films for sensors

    Embodiments include a sensor comprising a co-polymer, the co-polymer comprising a first monomer and a second monomer. For some embodiments, the first monomer is poly-4-vinyl pyridine, and the second monomer is poly-4-vinyl pyridinium propylamine chloride. For some embodiments, the first monomer is polystyrene and the second monomer is poly-2-vinyl pyridinium propylamine chloride. For some embodiments, the first monomer is poly-4-vinyl pyridine, and the second monomer is poly-4-vinyl pyridinium benzylamine chloride. Other embodiments are described and claimed.

    System for detecting and estimating concentrations of gas or liquid analytes

    A sensor system for detecting and estimating concentrations of various gas or liquid analytes. In an embodiment, the resistances of a set of sensors are measured to provide a set of responses over time, where the resistances are indicative of gas or liquid sorption, depending upon the sensors. A concentration vector for the analytes is estimated by satisfying a criterion of goodness using the set of responses. Other embodiments are described and claimed.
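
    The abstract leaves the "criterion of goodness" unspecified. A minimal sketch of one plausible choice, assuming a linear sensor response model and a non-negative least-squares fit via SciPy; the sensitivity matrix, analyte names, and readings below are illustrative, not from the patent:

        # Sketch: estimate analyte concentrations from chemiresistor responses.
        # Assumes a linear response model response = S @ c, where S[i, j] is
        # sensor i's sensitivity to analyte j. All numbers are illustrative.
        import numpy as np
        from scipy.optimize import nnls

        S = np.array([[0.80, 0.10],   # sensor 1: mostly CO-sensitive
                      [0.15, 0.70],   # sensor 2: mostly HCN-sensitive
                      [0.40, 0.35]])  # sensor 3: cross-sensitive
        responses = np.array([85.0, 42.0, 55.0])  # measured resistance changes

        # Non-negative least squares as one possible "criterion of goodness":
        # minimize ||S c - responses||_2 subject to c >= 0.
        c, residual = nnls(S, responses)
        print(dict(zip(["CO", "HCN"], c.round(1))), "residual:", round(residual, 2))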

    Weakly supervised approaches for quality estimation

    Binary credal classification under sparsity constraints.

    Binary classification is a well-known problem in statistics. Besides classical methods, several techniques such as the naive credal classifier (for categorical data) and imprecise logistic regression (for continuous data) have been proposed to handle sparse data. However, a convincing approach to the classification problem in high dimensional settings (i.e., when the number of attributes is larger than the number of observations) is yet to be explored in the context of imprecise probability. In this article, we propose a sensitivity analysis based on a penalised logistic regression scheme that works as a binary classifier for high dimensional cases. We use an approach based on a set of likelihood functions (i.e., an imprecise likelihood) that assigns a set of weights to the attributes, ensuring a robust selection of the important attributes while training the model at the same time. We perform a sensitivity analysis on the weights of the penalty term, resulting in a set of sparse constraints which helps to identify imprecision in the dataset.
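
    A minimal sketch of the sensitivity-analysis flavour of this approach, assuming scikit-learn's L1-penalised logistic regression over a grid of penalty weights: attributes selected at every weight form the robust set, while weight-dependent selections mark imprecision. This is an illustration, not the authors' exact scheme:

        # Sketch: sensitivity analysis over the L1 penalty weight in logistic
        # regression. Attributes with nonzero coefficients at every weight form
        # a robust selection; attributes selected only at some weights mark
        # imprecision. Illustrative, not the paper's exact method.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression

        # High-dimensional case: more attributes than observations.
        X, y = make_classification(n_samples=50, n_features=200,
                                   n_informative=5, random_state=0)

        selected = []
        for C in [0.1, 0.25, 0.5, 1.0, 2.0]:  # grid of inverse penalty weights
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=C).fit(X, y)
            selected.append(set(np.flatnonzero(clf.coef_[0])))

        robust = set.intersection(*selected)       # kept under every weight
        imprecise = set.union(*selected) - robust  # selection depends on the weight
        print("robust attributes:", sorted(robust))
        print("weight-dependent attributes:", len(imprecise))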

    A novel application of quantile regression for identification of biomarkers exemplified by equine cartilage microarray data

    Background: Identification of biomarkers among thousands of genes arrayed for disease classification has been the subject of considerable research in recent years. These studies have focused on disease classification, comparing experimental groups of affected to normal patients. Related experiments can be done to identify tissue-restricted biomarkers, genes with a high level of expression in one tissue compared to other tissue types in the body. Results: In this study, cartilage was compared with ten other body tissues using a two color array experimental design. Thirty-seven probe sets were identified as cartilage biomarkers. Of these, 13 (35%) have existing annotation associated with cartilage, including several well-established cartilage biomarkers. These genes comprise a useful database from which novel targets for cartilage biology research can be selected. We determined cartilage-specific Z-scores based on the observed M to classify genes with Z-scores ≥ 1.96 in all ten cartilage/tissue comparisons as cartilage-specific genes. Conclusion: Quantile regression is a promising method for the analysis of two color array experiments that compare multiple samples in the absence of biological replicates, thereby limiting quantifiable error. We used a nonparametric approach to reveal the relationship between percentiles of M and A, where M is log2(R/G) and A is 0.5 log2(RG), with R representing the gene expression level in cartilage and G representing the gene expression level in one of the other ten tissues. Then we performed linear quantile regression to identify genes with a cartilage-restricted pattern of expression.
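
    A minimal sketch of the quantile-regression step, assuming simulated (M, A) pairs and statsmodels' QuantReg. The Z ≥ 1.96 cutoff follows the abstract; estimating the spread from the 50th and 84th percentile fits is an illustrative stand-in for the paper's procedure:

        # Sketch: flag cartilage-enriched genes in one two-colour comparison.
        # M = log2(R/G), A = 0.5 * log2(R*G). Fit linear quantile regressions
        # of M on A and turn each gene's M into an intensity-adjusted Z-score.
        # Data are simulated; the paper repeats this over ten comparisons.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 5000
        df = pd.DataFrame({"A": rng.uniform(6, 14, n),
                           "M": rng.normal(0, 0.4, n)})
        df.loc[:39, "M"] += 3.0  # a few genes strongly enriched in cartilage

        # Median line plus a spread estimate from the 84th percentile line
        # (84th minus 50th approximates one standard deviation under normality).
        q50 = smf.quantreg("M ~ A", df).fit(q=0.50)
        q84 = smf.quantreg("M ~ A", df).fit(q=0.84)
        center = q50.predict(df)
        spread = q84.predict(df) - center
        z = (df["M"] - center) / spread

        print("genes with Z >= 1.96 in this comparison:", int((z >= 1.96).sum()))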

    Gene and pathway identification with Lp penalized Bayesian logistic regression

    Background: Identifying genes and pathways associated with diseases such as cancer has been a subject of considerable research in recent years in the area of bioinformatics and computational biology. It has been demonstrated that the magnitude of differential expression does not necessarily indicate biological significance. Even a very small change in the expression of a particular gene may have dramatic physiological consequences if the protein encoded by this gene plays a catalytic role in a specific cell function. Moreover, highly correlated genes may function together on the same pathway biologically. Finally, in sparse logistic regression with an Lp (p < 1) penalty, the degree of sparsity obtained is determined by the value of the regularization parameter, which usually must be carefully tuned through cross-validation, a time-consuming process. Results: In this paper, we propose a simple Bayesian approach to integrate the regularization parameter out analytically using a new prior. Therefore, there is no longer a need for parameter selection, as it is eliminated entirely from the model. The proposed algorithm (BLpLog) is typically two or three orders of magnitude faster than the original algorithm and free from bias in performance estimation. We also define a novel similarity measure and develop an integrated algorithm to hunt the regulatory genes with low expression changes but high correlation with the selected genes. Pathways of those correlated genes were identified with DAVID (http://david.abcc.ncifcrf.gov/). Conclusion: Experimental results with gene expression data demonstrate that the proposed methods can be utilized to identify important genes and pathways that are related to cancer and to build a parsimonious model for future patient predictions.
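
    The analytic integration of the regularization parameter is specific to the authors' prior, but the follow-up step of hunting regulatory genes that correlate with the selected ones can be sketched simply. The 0.8 correlation threshold and the data below are illustrative, not the paper's similarity measure:

        # Sketch: given genes already selected by the sparse classifier, find
        # additional genes with small expression changes but high correlation
        # to a selected gene. The threshold and the data are illustrative;
        # the paper defines its own similarity measure.
        import numpy as np

        rng = np.random.default_rng(1)
        expr = rng.normal(size=(60, 500))  # samples x genes
        # Gene 7 tracks gene 3 with a small amplitude (low expression change):
        expr[:, 7] = 0.2 * expr[:, 3] + rng.normal(scale=0.05, size=60)

        selected = [3, 42, 99]                  # indices chosen by the classifier
        corr = np.corrcoef(expr, rowvar=False)  # gene-gene correlation matrix

        candidates = set()
        for g in selected:
            candidates.update(np.flatnonzero(np.abs(corr[g]) > 0.8).tolist())
        candidates -= set(selected)             # drop the seeds themselves
        print("correlated regulatory candidates:", sorted(candidates))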

    Individualized markers optimize class prediction of microarray data

    BACKGROUND: Identification of molecular markers for the classification of microarray data is a challenging task. Despite the evident dissimilarity in various characteristics of biological samples belonging to the same category, most marker-selection and classification methods do not consider this variability. In general, feature selection methods aim at identifying a common set of genes whose combined expression profiles can accurately predict the category of all samples. Here, we argue that this simplified approach is often unable to capture the complexity of a disease phenotype, and we propose an alternative method that takes into account the individuality of each patient sample. RESULTS: Instead of using the same features for the classification of all samples, the proposed technique starts by creating a pool of informative gene features. For each sample, the method selects a subset of these features whose expression profiles are most likely to accurately predict the sample's category. Different subsets are utilized for different samples, and the outcomes are combined in a hierarchical framework for the classification of all samples. Moreover, this approach can innately identify subgroups of samples within a given class which share common feature sets, thus highlighting the effect of individuality on gene expression. CONCLUSION: In addition to high classification accuracy, the proposed method offers a more individualized approach for the identification of biological markers, which may help in better understanding the molecular background of a disease and emphasize the need for more flexible medical interventions.
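
    A minimal sketch of the per-sample idea, assuming a nearest-centroid classifier: a pool of informative features is built on training data, and each test sample is classified using only the pool features most extreme for that sample. The selection rule here is a stand-in for the paper's hierarchical framework:

        # Sketch: individualized feature subsets for nearest-centroid
        # classification. A pool is ranked by a t-statistic on training data;
        # per test sample we keep the pool features most extreme for that
        # sample. A stand-in heuristic, not the paper's hierarchical method.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        Xtr = rng.normal(size=(80, 300))
        ytr = np.repeat([0, 1], 40)
        Xtr[ytr == 1, :10] += 1.5                 # ten informative genes
        Xte = rng.normal(size=(5, 300))
        Xte[:, :10] += 1.5                        # class-1 test samples

        t, _ = stats.ttest_ind(Xtr[ytr == 0], Xtr[ytr == 1])
        pool = np.argsort(-np.abs(t))[:30]        # pool of informative features

        mu = Xtr.mean(axis=0)
        sd = Xtr.std(axis=0)
        mu0 = Xtr[ytr == 0].mean(axis=0)
        mu1 = Xtr[ytr == 1].mean(axis=0)

        for i, x in enumerate(Xte):
            # Per-sample subset: pool features where this sample is most extreme.
            subset = pool[np.argsort(-np.abs((x[pool] - mu[pool]) / sd[pool]))[:10]]
            d0 = np.linalg.norm(x[subset] - mu0[subset])
            d1 = np.linalg.norm(x[subset] - mu1[subset])
            print(f"sample {i}: predicted class {int(d1 < d0)}")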

    Graphical modeling of binary data using the LASSO: a simulation study

    Background: Graphical models have been identified as a promising new approach to modeling high-dimensional clinical data. They provide a probabilistic tool to display, analyze and visualize net-like dependence structures by drawing a graph describing the conditional dependencies between the variables. Until now, the main focus of research has been on building Gaussian graphical models for continuous multivariate data following a multivariate normal distribution; satisfactory solutions for binary data have been missing. We adapted the method of Meinshausen and Bühlmann to binary data and used the LASSO for logistic regression. The objective of this paper was to examine the performance of the Bolasso for the development of graphical models for high-dimensional binary data. We hypothesized that the performance of the Bolasso is superior to competing LASSO methods for identifying graphical models. Methods: We analyzed the Bolasso for deriving graphical models in comparison with other LASSO-based methods. Model performance was assessed in a simulation study with random data generated via symmetric local logistic regression models and Gibbs sampling. Main outcome variables were the Structural Hamming Distance and the Youden Index. We applied the results of the simulation study to real-life functioning data of patients with head and neck cancer. Results: Bootstrap aggregating as incorporated in the Bolasso algorithm greatly improved performance at higher sample sizes, while the number of bootstraps had minimal impact on performance. The Bolasso performed reasonably well with a cutpoint of 0.90 and a small penalty term. Optimizing prediction for the Bolasso leads to very conservative models in comparison with AIC, BIC or cross-validated optimal penalty terms. Conclusions: Bootstrap aggregating may improve variable selection if the underlying selection process is not too unstable due to small sample size and if one is mainly interested in reducing the false discovery rate. We propose using the Bolasso for graphical modeling in large sample sizes.
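
    A minimal sketch of Bolasso-style neighbourhood selection for a binary graphical model, assuming scikit-learn's L1-penalised logistic regression: for each node, fit on bootstrap resamples and keep an edge only if it is selected in at least 90% of them. The 0.90 cutpoint and small penalty follow the abstract; the data and the C value are illustrative:

        # Sketch: Bolasso-style neighbourhood selection for binary data. For
        # each node, fit an L1 logistic regression of that node on all others
        # across bootstrap resamples; an edge survives only if selected in at
        # least 90% of bootstraps. Data and the penalty C are illustrative.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        n, p, B = 400, 8, 50
        X = (rng.random((n, p)) > 0.5).astype(int)
        X[:, 1] = (X[:, 0] + (rng.random(n) > 0.8)) % 2  # node 1 depends on node 0

        edges = set()
        for j in range(p):
            others = [k for k in range(p) if k != j]
            counts = np.zeros(len(others))
            for _ in range(B):
                idx = rng.integers(0, n, n)  # bootstrap resample
                clf = LogisticRegression(penalty="l1", solver="liblinear",
                                         C=0.5).fit(X[idx][:, others], X[idx, j])
                counts += clf.coef_[0] != 0
            for k, c in zip(others, counts):
                if c / B >= 0.90:            # the cutpoint from the abstract
                    edges.add(tuple(sorted((j, k))))

        print("recovered edges:", sorted(edges))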